In mathematics, an orthogonal polynomial sequence is an infinite sequence of real polynomials
of one variable x, in which each pn has degree n, and such that any two different polynomials in the sequence are orthogonal to each other under a particular version of the L2 inner product.
The field of orthogonal polynomials developed in the late 19th century from a study of continued fractions by P. L. Chebyshev and was pursued by A. A. Markov and T. J. Stieltjes and by a few other mathematicians. Since then, applications have been developed in many areas of mathematics and physics.
The theory of orthogonal polynomials includes many definitions of orthogonality. In abstract notation, it is convenient to write

⟨p, q⟩ = 0

when the polynomials p(x) and q(x) are orthogonal. A sequence of orthogonal polynomials, then, is a sequence of polynomials

p_0, p_1, p_2, …

such that p_n has degree n and all distinct members of the sequence are orthogonal to each other.
The algebraic and analytic properties of the polynomials depend upon the specific assumptions about the operator ⟨·, ·⟩. In the classical formulation, the operator is defined in terms of the integral of a weighted product (see below) and happens to be an inner product. Other formulations remove various assumptions, for example in the context of Hilbert spaces or non-Hermitian operators (see below). Most of the discussion in this article applies to the classical definition.
Let [a, b] be an interval in the real line (where a = −∞ and b = ∞ are allowed). This is called the interval of orthogonality. Let

W : [a, b] → ℝ

be a function on the interval that is strictly positive on the interior (a, b), but which may be zero or go to infinity at the end points. Additionally, W must satisfy the requirement that, for any polynomial f, the integral

∫_a^b f(x) W(x) dx

is finite. Such a W is called a weight function.

Given any a, b, and W as above, define an operation on pairs of polynomials f and g by

⟨f, g⟩ = ∫_a^b f(x) g(x) W(x) dx
This operation is an inner product on the vector space of all polynomials. It induces a notion of orthogonality in the usual way, namely that two polynomials are orthogonal if their inner product is zero.
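As a concrete illustration, this inner product can be evaluated numerically; the sketch below (the names `inner`, `H1`, and `H2` are chosen here for illustration) uses scipy quadrature with the Hermite weight e^(−x²) on (−∞, ∞):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, W, a, b):
    # <f, g> = integral of f(x) g(x) W(x) over [a, b]
    val, _ = quad(lambda x: f(x) * g(x) * W(x), a, b)
    return val

W = lambda x: np.exp(-x**2)   # Hermite weight function
H1 = lambda x: 2*x            # Hermite polynomial H_1
H2 = lambda x: 4*x**2 - 2     # Hermite polynomial H_2

print(inner(H1, H2, W, -np.inf, np.inf))  # ~0: H_1 and H_2 are orthogonal
print(inner(H1, H1, W, -np.inf, np.inf))  # ~2*sqrt(pi): the squared norm of H_1
```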
Many alternative theories of orthogonal polynomials have been studied and, along with the classical theory, remain active areas of research.[1] Some aspects of the classical theory generalize when certain assumptions are lifted, and new properties can arise in different contexts.
In some theories the polynomials are defined on other domains, such as the complex numbers, matrices, and the unit circle (as a subset of the complex numbers).
Much of the general theory is for operators that satisfy the axioms of an inner product. This includes inner products within a Hilbert space (where the polynomials can be interpreted as an orthogonal basis) and inner products that can be defined as an integral of the form

⟨f, g⟩ = ∫ f(x) g(x) dμ(x)

where μ is a positive measure; this in turn includes the classical definition as well as the probabilistic definition (where the measure is a probability measure) and the discrete definition (where the integral is an infinite weighted sum).
The effects of lifting the inner product assumption of positive definiteness have also been studied (e.g. negative weights, discrete coefficients or non-Hermitian operators). In this theory, the terms system and sequence of orthogonal polynomials are distinct because pairs of polynomials of the same degree may be orthogonal.
For the remainder of this article the classical definition is assumed.
The chosen inner product induces a norm on polynomials in the usual way:

‖p‖ = √⟨p, p⟩
When making an orthogonal basis, one may be tempted to make an orthonormal basis, that is, one in which all basis elements have norm 1. For polynomials, this would often result in awkward square roots in the coefficients. Instead, polynomials are usually scaled in a way that mathematicians agree on, one that makes the coefficients and other formulas simpler. This is called standardization. The "classical" polynomials listed below have been standardized, typically by setting their leading coefficients to some specific quantity, or by setting a specific value for the polynomial at a given point. This standardization has no mathematical significance; it is just a convention. Standardization also involves scaling the weight function in an agreed-upon way.
Denote by h_n the square of the norm of p_n:

h_n = ⟨p_n, p_n⟩

The values of h_n for the standardized classical polynomials are listed in the table below. In this notation,

⟨p_m, p_n⟩ = δ_mn h_n

where δ_mn is the Kronecker delta.
The simplest classical orthogonal polynomials are the Legendre polynomials, for which the interval of orthogonality is [−1, 1] and the weight function is simply 1:
These are all orthogonal over [−1, 1]; whenever m ≠ n,

∫_{−1}^{1} P_m(x) P_n(x) dx = 0
The Legendre polynomials are standardized so that Pn(1) = 1 for all n.
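Both properties can be checked numerically; here is a sketch with numpy's Legendre module (the exact squared norms h_n = 2/(2n + 1) appear in the table below):

```python
import numpy as np
from numpy.polynomial import legendre as L

for n in range(6):
    Pn = L.Legendre.basis(n)             # the standardized Legendre P_n
    assert abs(Pn(1.0) - 1.0) < 1e-12    # standardization: P_n(1) = 1
    # Gauss-Legendre quadrature with n+1 nodes integrates P_n^2 exactly
    xg, wg = L.leggauss(n + 1)
    hn = np.sum(wg * Pn(xg)**2)
    assert abs(hn - 2.0/(2*n + 1)) < 1e-10
print("P_n(1) = 1 and h_n = 2/(2n + 1) hold for n = 0..5")
```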
The simplest non-classical orthogonal polynomials are the monomials

1, x, x², x³, …

which are orthogonal under the inner product given by the path integral

⟨f, g⟩ = (1/2πi) ∮ f(z) g(z)* z^(−1) dz

taken counterclockwise around the unit circle, where g(z)* is the complex conjugate of g(z). This inner product cannot be written in the classical form as a weighted integral of a product over a real interval. The properties of orthogonal polynomials on the unit circle differ from those of classical orthogonal polynomials (such as the form of the recurrence relations and the distribution of roots) and are related to the theory of Fourier series.
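A quick numerical check of this orthogonality, using a uniform average over the unit circle (equivalent to the contour integral after the substitution z = e^(iθ); the grid size 2048 is an arbitrary choice):

```python
import numpy as np

theta = np.linspace(0.0, 2.0*np.pi, 2049)[:-1]  # 2048 equally spaced angles
z = np.exp(1j * theta)

def inner_circle(m, n):
    # discrete version of the contour inner product of z^m and z^n
    return np.mean(z**m * np.conj(z**n))

print(abs(inner_circle(2, 3)))  # ~0: distinct monomials are orthogonal
print(abs(inner_circle(3, 3)))  # ~1: each monomial has unit norm
```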
All orthogonal polynomial sequences have a number of elegant and fascinating properties. Before proceeding with them:
Lemma 1: Given an orthogonal polynomial sequence p_0, p_1, p_2, …, any nth-degree polynomial S(x) can be expanded in terms of p_0, …, p_n. That is, there are coefficients d_0, …, d_n such that

S(x) = Σ_{j=0}^{n} d_j p_j(x)

Proof by mathematical induction. Choose d_n so that the x^n term of S(x) matches that of d_n p_n(x). Then S(x) − d_n p_n(x) is an (n − 1)th-degree polynomial. Continue downward.

The coefficients can be calculated directly using orthogonality. First multiply by p_j and the weight function W, then integrate:

⟨S, p_j⟩ = d_j ⟨p_j, p_j⟩ = d_j h_j

giving

d_j = ⟨S, p_j⟩ / h_j
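The coefficient formula can be exercised directly; here is a sketch expanding S(x) = x³ + x in Legendre polynomials (weight 1, with h_j = 2/(2j + 1) from the table below):

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.integrate import quad

S = np.polynomial.Polynomial([0, 1, 0, 1])   # S(x) = x + x^3
d = []
for j in range(4):
    Pj = L.Legendre.basis(j)
    num, _ = quad(lambda t: S(t) * Pj(t), -1, 1)  # <S, P_j> with weight W = 1
    d.append(num / (2.0/(2*j + 1)))               # divide by h_j

# recombining the expansion reproduces S
xs = np.linspace(-1, 1, 7)
recon = sum(dj * L.Legendre.basis(j)(xs) for j, dj in enumerate(d))
print(np.allclose(recon, S(xs)))   # True
```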
Lemma 2: Given an orthogonal polynomial sequence, each of its polynomials is orthogonal to any polynomial of strictly lower degree.

Proof: Given n, any polynomial of degree n − 1 or lower can be expanded in terms of p_0, …, p_{n−1}. The polynomial p_n is orthogonal to each of them.
Each polynomial in an orthogonal sequence has minimal norm among all polynomials with the same degree and leading coefficient.

Indeed, any such polynomial can be written p(x) = p_n(x) + Σ_{j<n} d_j p_j(x), so, using orthogonality, its squared norm satisfies

⟨p, p⟩ = h_n + Σ_{j<n} d_j² h_j ≥ h_n = ⟨p_n, p_n⟩

An interpretation of this result is that orthogonal polynomials are minimal in a generalized least squares sense. For example, the classical orthogonal polynomials have a minimal weighted mean square value.
Any orthogonal sequence has a recurrence formula relating any three consecutive polynomials in the sequence:

p_{n+1}(x) = (a_n x + b_n) p_n(x) − c_n p_{n−1}(x)

The coefficients a, b, and c depend on n, as well as the standardization.
We will prove this for fixed n, and omit the subscripts on a, b, and c.

First, choose a so that the x^{n+1} terms match, so that

p_{n+1}(x) − a x p_n(x)

has degree at most n. Next, choose b so that the x^n terms match, so that

p_{n+1}(x) − (a x + b) p_n(x)

has degree at most n − 1. Expand the right-hand side in terms of polynomials in the sequence:

p_{n+1}(x) − (a x + b) p_n(x) = Σ_{j=0}^{n−1} d_j p_j(x)

Now if 0 ≤ j ≤ n − 1, then, taking the inner product with p_j,

d_j h_j = ⟨p_{n+1} − (a x + b) p_n, p_j⟩ = −a ⟨x p_n, p_j⟩

since p_{n+1} and p_n are both orthogonal to p_j. Since the inner product is just an integral involving the product of its arguments and the weight function:

⟨x p_n, p_j⟩ = ∫ x p_n(x) p_j(x) W(x) dx = ⟨p_n, x p_j⟩

If j ≤ n − 2, then x p_j(x) has degree at most n − 1, so it is orthogonal to p_n; hence ⟨p_n, x p_j⟩ = 0, which implies d_j = 0 for j ≤ n − 2.

Therefore, only d_{n−1} can be nonzero, so

p_{n+1}(x) − (a x + b) p_n(x) = d_{n−1} p_{n−1}(x)

Letting c = −d_{n−1}, we have

p_{n+1}(x) = (a x + b) p_n(x) − c p_{n−1}(x)
The values of a, b, and c can be worked out directly. Let k_n and k′_n be the first and second coefficients of p_n:

p_n(x) = k_n x^n + k′_n x^{n−1} + ⋯

and h_n be the inner product of p_n with itself:

h_n = ⟨p_n, p_n⟩

We have

a = k_{n+1}/k_n,  b = a (k′_{n+1}/k_{n+1} − k′_n/k_n),  c = a (k_{n−1} h_n)/(k_n h_{n−1})
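The recurrence in action, for a family where the coefficients are simple: for the Chebyshev polynomials of the first kind (covered below), a_n = 2, b_n = 0, and c_n = 1, so T_{n+1} = 2x T_n − T_{n−1}. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
T = [sp.Integer(1), x]                       # T_0 = 1, T_1 = x
for n in range(1, 5):
    T.append(sp.expand(2*x*T[n] - T[n-1]))   # T_{n+1} = 2x T_n - T_{n-1}
print(T[4])   # 8*x**4 - 8*x**2 + 1
```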
Each polynomial in an orthogonal sequence has all n of its roots real, distinct, and strictly inside the interval of orthogonality.
This follows from the proof of interlacing of roots below. Here is a direct proof.
Let m be the number of places where the sign of Pn changes inside the interval of orthogonality, and let be those points. Each of those points is a root of Pn. By the fundamental theorem of algebra, m ≤ n. Now m might be strictly less than n if some roots of Pn are complex, or not inside the interval of orthogonality, or not distinct. We will show that m = n.
Let

S(x) = (x − x_1)(x − x_2) ⋯ (x − x_m)

This is an mth-degree polynomial that changes sign at each of the x_j, the same way that P_n(x) does. S(x)P_n(x) is therefore strictly positive, or strictly negative, everywhere except at the x_j. S(x)P_n(x)W(x) is also strictly positive or strictly negative except at the x_j and possibly the end points.

Therefore ⟨S, P_n⟩, the integral of this, is nonzero. But, by Lemma 2, P_n is orthogonal to any polynomial of lower degree, so the degree of S must be n; that is, m = n.
The roots of each polynomial lie strictly between the roots of the next higher polynomial in the sequence.
First, standardize all of the polynomials so that their leading coefficients are positive. This will not affect the roots.
We use induction on n. Let n ≥ 1 and let x_1 < x_2 < ⋯ < x_n be the roots of p_n. Assuming by induction that the roots of p_{n−1} lie strictly between the x_j, we find that the signs of p_{n−1}(x_1), …, p_{n−1}(x_n) alternate with j. Moreover p_{n−1}(x_n) > 0, since the leading coefficient of p_{n−1} is positive and p_{n−1} has no zero greater than x_{n−1} < x_n. To summarize, the sign of p_{n−1}(x_j) is (−1)^{n−j}.

By the recurrence formula

p_{n+1}(x_j) = (a x_j + b) p_n(x_j) − c p_{n−1}(x_j) = −c p_{n−1}(x_j)

with c > 0 (which follows from the positivity of the leading coefficients and of the h_n) we conclude that the sign of p_{n+1}(x_j) is (−1)^{n−j+1}. By the intermediate value theorem, p_{n+1} has at least one zero between x_j and x_{j+1} for j = 1, …, n − 1. Additionally, p_{n+1}(x_n) < 0 and the leading coefficient of p_{n+1} is positive, so p_{n+1} has an additional zero greater than x_n. For a similar reason it has a zero less than x_1, and the induction step is complete.
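Both root theorems can be observed numerically; here is a sketch with the Legendre polynomials P_4 and P_5 (any classical family would do):

```python
import numpy as np
from numpy.polynomial import legendre as L

r4 = np.sort(L.Legendre.basis(4).roots().real)
r5 = np.sort(L.Legendre.basis(5).roots().real)

print(np.all((r5 > -1) & (r5 < 1)))            # True: roots of P_5 lie inside (-1, 1)
print(np.all(np.diff(r5) > 0))                 # True: the roots are distinct
print(np.all((r4 > r5[:-1]) & (r4 < r5[1:])))  # True: roots of P_4 interlace those of P_5
```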
A very important class of orthogonal polynomials arises from a differential equation of the form

Q(x) f″ + L(x) f′ + λ f = 0

where Q is a given quadratic (at most) polynomial, and L is a given linear polynomial. The function f, and the constant λ, are to be found.

This is a Sturm–Liouville type of equation. Such equations generally have singularities in their solution functions f except for particular values of λ. They can be thought of as eigenvector/eigenvalue problems: letting D be the differential operator, D(f) = Q f″ + L f′, and changing the sign of λ, the problem is to find the eigenvectors (eigenfunctions) f, and the corresponding eigenvalues λ, such that f does not have singularities and D(f) = λf.
The solutions of this differential equation have singularities unless λ takes on specific values. There is a series of numbers λ_0, λ_1, λ_2, … that leads to a series of polynomial solutions P_0, P_1, P_2, … if one of the following sets of conditions is met:

1. Q is genuinely quadratic, with two distinct real roots, the root of L lies strictly between the roots of Q, and the leading terms of Q and L have the same sign.
2. Q is not quadratic but linear, the roots of Q and L are different, and the leading terms of Q and L have the same sign if the root of L is less than the root of Q, and opposite signs otherwise.
3. Q is just a nonzero constant, L is linear, and the leading term of L has the opposite sign of Q.
These three cases lead to the Jacobi-like, Laguerre-like, and Hermite-like polynomials, respectively.
In each of these three cases, we have the following: the solutions are a series of polynomials P_0, P_1, P_2, …, each P_n having degree n and corresponding to the number λ_n; the interval of orthogonality is bounded by whichever roots of Q exist; and, letting

R(x) = e^(∫ (L(x)/Q(x)) dx)

the weight function is W(x) = R(x)/Q(x).

Because of the constant of integration, the quantity R(x) is determined only up to an arbitrary positive multiplicative constant. It will be used only in homogeneous differential equations (where this doesn't matter) and in the definition of the weight function (which can also be indeterminate). The tables below will give the "official" values of R(x) and W(x).
Under the assumptions of the preceding section, P_n(x) is proportional to

(1/W(x)) (d^n/dx^n) [W(x) Q(x)^n]

This is known as Rodrigues' formula, after Olinde Rodrigues. It is often written

P_n(x) = (1/(e_n W(x))) (d^n/dx^n) [W(x) Q(x)^n]

where the numbers e_n depend on the standardization. The standard values of e_n will be given in the tables below.
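Rodrigues' formula is easy to check symbolically in a concrete case. For the Hermite polynomials (covered below), W(x) = e^(−x²), Q(x) = 1, and e_n = (−1)^n; a sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

def hermite_rodrigues(n):
    # H_n(x) = (-1)^n e^(x^2) d^n/dx^n e^(-x^2)
    return sp.expand((-1)**n * sp.exp(x**2) * sp.diff(sp.exp(-x**2), x, n))

print(hermite_rodrigues(3))                                 # 8*x**3 - 12*x
print(hermite_rodrigues(3) == sp.expand(sp.hermite(3, x)))  # True: matches sympy's H_3
```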
Under the assumptions of the preceding section, we have

λ_n = −n ((n − 1)/2 Q″ + L′)

(Since Q is quadratic and L is linear, Q″ and L′ are constants, so these are just numbers.)
Let

R(x) = e^(∫ (L(x)/Q(x)) dx)

Then

(R y′)′ = R y″ + R′ y′ = R y″ + (R L / Q) y′

Now multiply the differential equation

Q y″ + L y′ + λ y = 0

by R/Q, getting

R y″ + (R L / Q) y′ + (λ R / Q) y = 0

or

(R y′)′ + (λ R / Q) y = 0

This is the standard Sturm–Liouville form for the equation.
Let

S(x) = √R(x)

Then

S′ = (L / 2Q) S

Now multiply the differential equation

Q y″ + L y′ + λ y = 0

by S/Q, getting

S y″ + (S L / Q) y′ + (λ S / Q) y = 0

or

S y″ + 2 S′ y′ + (λ S / Q) y = 0

But (S y)″ = S y″ + 2 S′ y′ + S″ y, so

(S y)″ + (λ S / Q − S″) y = 0

or, letting u = S y,

u″ + (λ / Q − S″ / S) u = 0
Under the assumptions of the preceding section, let P_n^[r] denote the rth derivative of P_n. (We put the "r" in brackets to avoid confusion with an exponent.) P_n^[r] is a polynomial of degree n − r. Then we have the following:
There are also some mixed recurrences. In each of these, the numbers a, b, and c depend on n and r, and are unrelated in the various formulas.
There are an enormous number of other formulas involving orthogonal polynomials in various ways. Here is a tiny sample of them, relating to the Chebyshev, associated Laguerre, and Hermite polynomials:
The differential equation for a particular λ_n may be written (omitting explicit dependence on x)

Q f_n″ + L f_n′ + λ_n f_n = 0

multiplying by (R/Q) f_m yields

R f_m f_n″ + (R L / Q) f_m f_n′ + (λ_n R / Q) f_m f_n = 0

and reversing the subscripts yields

R f_n f_m″ + (R L / Q) f_n f_m′ + (λ_m R / Q) f_m f_n = 0

subtracting and integrating:

∫_a^b [R (f_m f_n″ − f_n f_m″) + (R L / Q)(f_m f_n′ − f_n f_m′)] dx + (λ_n − λ_m) ∫_a^b (R / Q) f_m f_n dx = 0

but it can be seen (using R′ = R L / Q) that

(d/dx) [R (f_m f_n′ − f_n f_m′)] = R (f_m f_n″ − f_n f_m″) + (R L / Q)(f_m f_n′ − f_n f_m′)

so that:

[R (f_m f_n′ − f_n f_m′)]_a^b + (λ_n − λ_m) ∫_a^b (R / Q) f_m f_n dx = 0

If the polynomials f are such that the term on the left is zero, and λ_m ≠ λ_n for m ≠ n, then the orthogonality relationship will hold:

∫_a^b (R / Q) f_m f_n dx = 0

for m ≠ n.
The class of polynomials arising from the differential equation described above have many important applications in such areas as mathematical physics, interpolation theory, the theory of random matrices, computer approximations, and many others. All of these polynomial sequences are equivalent, under scaling and/or shifting of the domain, and standardizing of the polynomials, to more restricted classes. Those restricted classes are the "classical orthogonal polynomials".
Because all polynomial sequences arising from a differential equation in the manner described above are trivially equivalent to the classical polynomials, the actual classical polynomials are always used.
The Jacobi-like polynomials, once they have had their domain shifted and scaled so that the interval of orthogonality is [−1, 1], still have two parameters to be determined. They are α and β in the Jacobi polynomials, written P_n^(α, β). We have Q(x) = 1 − x² and L(x) = β − α − (α + β + 2) x. Both α and β are required to be greater than −1. (This puts the root of L inside the interval of orthogonality.)
When α and β are not equal, these polynomials are not symmetrical about x = 0.
The differential equation

(1 − x²) y″ + (β − α − (α + β + 2) x) y′ + λ y = 0,  with λ = n(n + 1 + α + β)

is Jacobi's equation.
For further details, see Jacobi polynomials.
When one sets the parameters α and β in the Jacobi polynomials equal to each other, one obtains the Gegenbauer or ultraspherical polynomials. They are written C_n^(α), and are proportional to the Jacobi polynomials P_n^(α − 1/2, α − 1/2).

We have Q(x) = 1 − x² and L(x) = −(2α + 1) x. The parameter α is required to be greater than −1/2.
(Incidentally, the standardization given in the table below would make no sense for α = 0 and n ≠ 0, because it would set the polynomials to zero. In that case, the accepted standardization sets C_n^(0)(1) = 2/n instead of the value given in the table.)
Ignoring the above considerations, the parameter α is closely related to the derivatives of C_n^(α):

(d/dx) C_n^(α)(x) = 2α C_{n−1}^(α+1)(x)

or, more generally:

(d^m/dx^m) C_n^(α)(x) = 2^m α(α + 1) ⋯ (α + m − 1) C_{n−m}^(α+m)(x)
All the other classical Jacobi-like polynomials (Legendre, etc.) are special cases of the Gegenbauer polynomials, obtained by choosing a value of and choosing a standardization.
For further details, see Gegenbauer polynomials.
The differential equation is

(1 − x²) y″ − 2x y′ + λ y = 0,  with λ = n(n + 1)

This is Legendre's equation.

The second form of the differential equation is:

((1 − x²) y′)′ + λ y = 0

The recurrence relation is

(n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) − n P_{n−1}(x)

A mixed recurrence is

(2n + 1) P_n(x) = (d/dx) [P_{n+1}(x) − P_{n−1}(x)]

Rodrigues' formula is

P_n(x) = (1/(2^n n!)) (d^n/dx^n) [(x² − 1)^n]

For further details, see Legendre polynomials.
The Associated Legendre polynomials, denoted P_ℓ^(m)(x) where ℓ and m are integers with 0 ≤ m ≤ ℓ, are defined as

P_ℓ^(m)(x) = (1 − x²)^(m/2) P_ℓ^[m](x)

The m in parentheses (to avoid confusion with an exponent) is a parameter. The m in brackets denotes the mth derivative of the Legendre polynomial.

These "polynomials" are misnamed: they are not actually polynomials when m is odd.
They have a recurrence relation:
For fixed m, the sequence P_m^(m), P_{m+1}^(m), P_{m+2}^(m), … are orthogonal over [−1, 1], with weight 1.

For given m, P_ℓ^(m)(x) are the solutions of

((1 − x²) y′)′ + (λ − m²/(1 − x²)) y = 0,  with λ = ℓ(ℓ + 1)
The differential equation is

(1 − x²) y″ − x y′ + λ y = 0,  with λ = n²

This is Chebyshev's equation.

The recurrence relation is

T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x)

Rodrigues' formula is

T_n(x) = ((−2)^n n! / (2n)!) √(1 − x²) (d^n/dx^n) [(1 − x²)^(n − 1/2)]

These polynomials have the property that, in the interval of orthogonality,

T_n(cos θ) = cos(nθ)
(To prove it, use the recurrence formula.)
This means that all their local minima and maxima have values of −1 and +1, that is, the polynomials are "level". Because of this, expansion of functions in terms of Chebyshev polynomials is sometimes used for polynomial approximations in computer math libraries.
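A sketch of such an approximation with numpy's Chebyshev module (the target function e^x and the degree 10 are arbitrary choices for illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# degree-10 Chebyshev interpolant of exp on [-1, 1]
cheb = C.Chebyshev.interpolate(np.exp, 10)

xs = np.linspace(-1, 1, 1001)
err = np.max(np.abs(cheb(xs) - np.exp(xs)))
print(err)   # a tiny uniform error across the whole interval
```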
Some authors use versions of these polynomials that have been shifted so that the interval of orthogonality is [0, 1] or [−2, 2].
There are also Chebyshev polynomials of the second kind, denoted U_n, which are orthogonal over [−1, 1] with weight (1 − x²)^(1/2). We have:

U_n(cos θ) = sin((n + 1)θ) / sin θ
For further details, including the expressions for the first few polynomials, see Chebyshev polynomials.
The most general Laguerre-like polynomials, after the domain has been shifted and scaled, are the Associated Laguerre polynomials (also called Generalized Laguerre polynomials), denoted L_n^(α). There is a parameter α, which can be any real number strictly greater than −1. The parameter is put in parentheses to avoid confusion with an exponent. The plain Laguerre polynomials are simply the α = 0 version of these:

L_n(x) = L_n^(0)(x)
The differential equation is

x y″ + (α + 1 − x) y′ + λ y = 0,  with λ = n

This is Laguerre's equation.

The second form of the differential equation is

(x^(α+1) e^(−x) y′)′ + λ x^α e^(−x) y = 0

The recurrence relation is

(n + 1) L_{n+1}^(α)(x) = (2n + 1 + α − x) L_n^(α)(x) − (n + α) L_{n−1}^(α)(x)

Rodrigues' formula is

L_n^(α)(x) = (x^(−α) e^x / n!) (d^n/dx^n) [x^(n+α) e^(−x)]

The parameter α is closely related to the derivatives of L_n^(α):

(d/dx) L_n^(α)(x) = −L_{n−1}^(α+1)(x)

or, more generally:

(d^m/dx^m) L_n^(α)(x) = (−1)^m L_{n−m}^(α+m)(x)
Laguerre's equation can be manipulated into a form that is more useful in applications:
is a solution of
This can be further manipulated. When is an integer, and :
is a solution of
The solution is often expressed in terms of derivatives instead of associated Laguerre polynomials:
This equation arises in quantum mechanics, in the radial part of the solution of the Schrödinger equation for a one-electron atom.
Physicists often use a definition for the Laguerre polynomials that is larger, by a factor of n!, than the definition used here.
For further details, including the expressions for the first few polynomials, see Laguerre polynomials.
The differential equation is

y″ − 2x y′ + λ y = 0,  with λ = 2n

This is Hermite's equation.

The second form of the differential equation is

(e^(−x²) y′)′ + λ e^(−x²) y = 0

The third form is

(e^(−x²/2) y)″ + (λ + 1 − x²)(e^(−x²/2) y) = 0

The recurrence relation is

H_{n+1}(x) = 2x H_n(x) − 2n H_{n−1}(x)

Rodrigues' formula is

H_n(x) = (−1)^n e^(x²) (d^n/dx^n) [e^(−x²)]

The first few Hermite polynomials are

H_0(x) = 1
H_1(x) = 2x
H_2(x) = 4x² − 2
H_3(x) = 8x³ − 12x
H_4(x) = 16x⁴ − 48x² + 12

One can define the associated Hermite functions

ψ_n(x) = e^(−x²/2) H_n(x)

Because the multiplier e^(−x²/2) is proportional to the square root of the weight function, these functions are orthogonal over (−∞, ∞) with no weight function.

The third form of the differential equation above, for the associated Hermite functions, is

ψ_n″ + (λ + 1 − x²) ψ_n = 0,  with λ = 2n
The associated Hermite functions arise in many areas of mathematics and physics. In quantum mechanics, they are the solutions of Schrödinger's equation for the harmonic oscillator. They are also eigenfunctions (with eigenvalue (−i)^n) of the continuous Fourier transform.
Many authors, particularly probabilists, use an alternate definition of the Hermite polynomials, with a weight function of e^(−x²/2) instead of e^(−x²). If the notation He is used for these Hermite polynomials, and H for those above, then they may be characterized by

He_n(x) = 2^(−n/2) H_n(x / √2)
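The relation He_n(x) = 2^(−n/2) H_n(x/√2), a standard identity connecting the two conventions, can be confirmed symbolically; in particular the He_n come out monic. A sympy sketch (sympy's `hermite` is the physicists' H_n):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(6):
    # probabilists' polynomial built from the physicists' one
    He_n = sp.expand(2**sp.Rational(-n, 2) * sp.hermite(n, x / sp.sqrt(2)))
    assert sp.Poly(He_n, x).LC() == 1   # He_n is monic
print("He_3 =", sp.expand(2**sp.Rational(-3, 2) * sp.hermite(3, x / sp.sqrt(2))))
```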
For further details, see Hermite polynomials.
The Gram–Schmidt process is an algorithm from linear algebra that orthogonalizes a set of vectors in an inner product space. The inner product defined on polynomials allows the Gram–Schmidt process to be applied to an arbitrary linearly independent set of polynomials (for example, the monomials), yielding a sequence of orthogonal polynomials. Given various initial polynomial sequences and weight functions, different orthogonal polynomial sequences can be produced.
We define a projection operator on the polynomials as:

proj_u(v) = (⟨u, v⟩ / ⟨u, u⟩) u

To apply the algorithm, we take our set of original polynomials v_0, v_1, v_2, … and generate a sequence of orthogonal polynomials using:

p_n = v_n − Σ_{j=0}^{n−1} proj_{p_j}(v_n)

If an orthonormal sequence is required, a polynomial normalization operation can be defined as:

p̂_n = p_n / √⟨p_n, p_n⟩
Care must be taken if the process is implemented on a computer, as the Gram–Schmidt process is numerically unstable. However, since many computational platforms implement rational numbers with arbitrary-precision arithmetic, the problem can often be avoided by working exactly.
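A sketch of the process in exact arithmetic with sympy, orthogonalizing the monomials 1, x, x², x³ against the Legendre inner product (weight 1 on [−1, 1]); the outputs are monic multiples of the Legendre polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def inner(f, g):
    return sp.integrate(f * g, (x, -1, 1))   # weight W = 1 on [-1, 1]

ortho = []
for n in range(4):
    v = x**n
    for u in ortho:
        v = v - inner(u, v) / inner(u, u) * u   # subtract the projection onto u
    ortho.append(sp.expand(v))
print(ortho)   # [1, x, x**2 - 1/3, x**3 - 3*x/5]
```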
Let

m_k = ∫ x^k dμ(x)

be the moments of a measure μ. Then the polynomial sequence defined by the (n + 1) × (n + 1) determinant

p_n(x) = det of
| m_0      m_1      ⋯  m_n      |
| m_1      m_2      ⋯  m_{n+1}  |
| ⋮                     ⋮       |
| m_{n−1}  m_n      ⋯  m_{2n−1} |
| 1        x        ⋯  x^n      |

is a sequence of orthogonal polynomials with respect to the measure μ. To see this, consider the inner product of p_n(x) with x^k for any k < n. We will see that the value of this inner product is zero.[2]
(The entry-by-entry integration merely says the integral of a linear combination of functions is the same linear combination of the separate integrals. It is a linear combination because only one row contains non-scalar entries.)
Thus pn(x) is orthogonal to xk for all k < n. That means this is a sequence of orthogonal polynomials for the measure μ.
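A sympy sketch of this construction for the Lebesgue measure on [−1, 1] (whose moments are m_k = 2/(k + 1) for even k and 0 for odd k); the resulting p_n come out as scalar multiples of the Legendre polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def moment(k):
    return sp.integrate(x**k, (x, -1, 1))   # m_k for Lebesgue measure on [-1, 1]

def p(n):
    # (n+1) x (n+1) matrix: n rows of moments, then the row 1, x, ..., x^n
    rows = [[moment(i + j) for j in range(n + 1)] for i in range(n)]
    rows.append([x**j for j in range(n + 1)])
    return sp.expand(sp.Matrix(rows).det())

print(p(2))   # proportional to the Legendre polynomial P_2
print(p(3))   # proportional to the Legendre polynomial P_3
```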
| Name, and conventional symbol | Chebyshev, T_n | Chebyshev (second kind), U_n | Legendre, P_n | Hermite, H_n |
|---|---|---|---|---|
| Limits of orthogonality | −1, 1 | −1, 1 | −1, 1 | −∞, ∞ |
| Weight, W(x) | (1 − x²)^(−1/2) | (1 − x²)^(1/2) | 1 | e^(−x²) |
| Standardization | T_n(1) = 1 | U_n(1) = n + 1 | P_n(1) = 1 | Lead term = 2^n |
| Square of norm, h_n | π for n = 0; π/2 for n ≥ 1 | π/2 | 2/(2n + 1) | 2^n n! √π |
| Leading term, k_n | 2^(n−1) for n ≥ 1; 1 for n = 0 | 2^n | (2n)!/(2^n (n!)²) | 2^n |
| Second term, k′_n | 0 | 0 | 0 | 0 |
| Constant in diff. equation, λ_n | n² | n(n + 2) | n(n + 1) | 2n |
| Constant in Rodrigues' formula, e_n | (−1)^n (2n)!/(2^n n!) | (−1)^n (2n + 1)!/(2^n (n + 1)!) | (−2)^n n! | (−1)^n |
| Recurrence relation, a_n | 2 | 2 | (2n + 1)/(n + 1) | 2 |
| Recurrence relation, b_n | 0 | 0 | 0 | 0 |
| Recurrence relation, c_n | 1 | 1 | n/(n + 1) | 2n |
| Name, and conventional symbol | Associated Laguerre, L_n^(α) | Laguerre, L_n |
|---|---|---|
| Limits of orthogonality | 0, ∞ | 0, ∞ |
| Weight, W(x) | x^α e^(−x) | e^(−x) |
| Standardization | Lead term = (−1)^n/n! | Lead term = (−1)^n/n! |
| Square of norm, h_n | Γ(n + α + 1)/n! | 1 |
| Leading term, k_n | (−1)^n/n! | (−1)^n/n! |
| Second term, k′_n | (−1)^(n−1) (n + α)/(n − 1)! | (−1)^(n−1) n/(n − 1)! |
| Constant in diff. equation, λ_n | n | n |
| Constant in Rodrigues' formula, e_n | n! | n! |
| Recurrence relation, a_n | −1/(n + 1) | −1/(n + 1) |
| Recurrence relation, b_n | (2n + 1 + α)/(n + 1) | (2n + 1)/(n + 1) |
| Recurrence relation, c_n | (n + α)/(n + 1) | n/(n + 1) |
| Name, and conventional symbol | Gegenbauer, C_n^(α) | Jacobi, P_n^(α, β) |
|---|---|---|
| Limits of orthogonality | −1, 1 | −1, 1 |
| Weight, W(x) | (1 − x²)^(α − 1/2) | (1 − x)^α (1 + x)^β |
| Standardization | C_n^(α)(1) = binom(n + 2α − 1, n) if α ≠ 0 | P_n^(α, β)(1) = binom(n + α, n) |
| Square of norm, h_n | π 2^(1 − 2α) Γ(n + 2α) / (n! (n + α) Γ(α)²) | 2^(α + β + 1) Γ(n + α + 1) Γ(n + β + 1) / ((2n + α + β + 1) n! Γ(n + α + β + 1)) |
| Leading term, k_n | 2^n Γ(n + α) / (n! Γ(α)) | Γ(2n + α + β + 1) / (2^n n! Γ(n + α + β + 1)) |
| Second term, k′_n | 0 | n (α − β) k_n / (2n + α + β) |
| Constant in diff. equation, λ_n | n(n + 2α) | n(n + 1 + α + β) |
| Constant in Rodrigues' formula, e_n | (−2)^n n! Γ(2α) Γ(n + α + 1/2) / (Γ(α + 1/2) Γ(n + 2α)) | (−2)^n n! |
| Recurrence relation, a_n | 2(n + α)/(n + 1) | (2n + α + β + 1)(2n + α + β + 2) / (2(n + 1)(n + α + β + 1)) |
| Recurrence relation, b_n | 0 | (α² − β²)(2n + α + β + 1) / (2(n + 1)(n + α + β + 1)(2n + α + β)) |
| Recurrence relation, c_n | (n + 2α − 1)/(n + 1) | (n + α)(n + β)(2n + α + β + 2) / ((n + 1)(n + α + β + 1)(2n + α + β)) |